Multistage and Multistep Demo

The code is written in Julia.

This code demonstrates how to use a "test problem" to set up and verify numerical methods. This is an important component of the whole process of developing and implementing numerical methods.

A new nonlinear ODE:

Consider

$$ v'''(t) + v'(t) v(t) - \frac{\beta_1 + \beta_2 + \beta_3}{3} v'(t) =0, $$

where $\beta_1 < \beta_2 < \beta_3$. It follows that $$ v(t) = \beta_2 + (\beta_3 - \beta_2) \mathrm{cn}^2\left( \sqrt{ \frac{\beta_3 - \beta_1}{12}} t, \sqrt{\frac{\beta_3 - \beta_2}{\beta_3 - \beta_1}} \right) $$ is a solution where $\mathrm{cn}(x,k)$ is the Jacobi elliptic cosine function. Some notations use $\mathrm{cn}(x,m)$ where $m = k^2$ (see Wikipedia). The second argument of the cn function is called the elliptic modulus. The corresponding initial conditions are

$$ v(0) = \beta_3, \\ v'(0) = 0,\\ v''(0) = -\frac{(\beta_3 - \beta_1)(\beta_3-\beta_2)}{6}.$$

We first turn it into a system $$ u_1'(t) = v'(t) = u_2(t),\\ u_2'(t) = v''(t) = u_3(t),\\ u_3'(t) = v'''(t) = \frac{\beta_1 + \beta_2 + \beta_3}{3} u_2(t) - u_2(t)u_1(t). $$

So, setting $c = \frac{\beta_1 + \beta_2 + \beta_3}{3}$, the system is $u'(t) = f(u(t))$ with $$ f(u) = \begin{bmatrix} u_2 \\ u_3 \\ u_2(c - u_1)\end{bmatrix}. $$

Because it will come back again when we use implicit methods, we record the Jacobian $$ D_u f(u) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -u_2 & c - u_1 & 0 \end{bmatrix}. $$

Plotting the true solution

In Julia you first need to call `using Pkg; Pkg.add("Elliptic")` to install the Elliptic.jl package.
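A minimal Julia sketch of the exact solution, assuming the Elliptic.jl package (its `Elliptic.Jacobi.cn(u, m)` takes the parameter $m = k^2$, matching the Matlab and Python calls below; the $\beta$ values here are an illustrative choice):

```julia
import Elliptic  # Elliptic.Jacobi.cn(u, m) uses the parameter m = k^2

b1, b2, b3 = 0.0, 1.0, 2.0  # illustrative choice with b1 < b2 < b3
v(t) = b2 + (b3 - b2)*Elliptic.Jacobi.cn(sqrt((b3 - b1)/12)*t, (b3 - b2)/(b3 - b1))^2
```

Evaluating `v(0)` should return `b3`, consistent with the initial condition above.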

In Matlab you'd use

```matlab
v = @(t) b2 + (b3-b2)*jacobiCN(sqrt((b3-b1)/12)*t,(b3-b2)/(b3-b1))^2
```

and in Python you would use

```python
from math import sqrt
from scipy.special import ellipj  # ellipj(u, m) returns (sn, cn, dn, ph)
v = lambda t: b2 + (b3-b2)*ellipj(sqrt((b3-b1)/12)*t, (b3-b2)/(b3-b1))[1]**2
```

Forward Euler

Even though the forward Euler method does not do a particularly good job here, let's test its first-order accuracy. While we have yet to discuss this in detail, we will see that a first-order local truncation error (plus stability) gives first-order convergence at a fixed time, say $t = 10$.

We decrease $k$ by a factor of $2$ each iteration:
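A sketch of that experiment in Julia (the $\beta$ values are an illustrative choice, and a fine-step run stands in for the exact elliptic-function solution):

```julia
b1, b2, b3 = 0.0, 1.0, 2.0                 # illustrative choice with b1 < b2 < b3
c = (b1 + b2 + b3)/3
f(u) = [u[2], u[3], u[2]*(c - u[1])]
u0 = [b3, 0.0, -(b3 - b1)*(b3 - b2)/6]

# advance u' = f(u) from t = 0 to t = T with forward Euler steps of size k
function euler(f, u0, k, T)
    u = copy(u0)
    for _ in 1:round(Int, T/k)
        u += k*f(u)
    end
    return u
end

T = 5.0
ref = euler(f, u0, 1e-5, T)                # fine-step reference solution
ks = [0.02, 0.01, 0.005, 0.0025]
errs = [abs(euler(f, u0, k, T)[1] - ref[1]) for k in ks]
ratios = errs[1:end-1] ./ errs[2:end]      # each halving of k should roughly halve the error
```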

As we can see, the error decreases by a factor of approximately $2$ each time $k$ is reduced by a factor of $2$. This is first-order convergence.

Leapfrog (a 2-step method)

Note that starting the two-step method with a single step of a first-order method still gives second-order convergence: one forward Euler step contributes only an $O(k^2)$ error, which matches the $O(k^2)$ global error we expect.
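A sketch of leapfrog, $U^{n+1} = U^{n-1} + 2 k f(U^n)$, started with a single forward Euler step (same illustrative setup as before; a fine-step run serves as the reference):

```julia
b1, b2, b3 = 0.0, 1.0, 2.0
c = (b1 + b2 + b3)/3
f(u) = [u[2], u[3], u[2]*(c - u[1])]
u0 = [b3, 0.0, -(b3 - b1)*(b3 - b2)/6]

# two-step leapfrog, started with one first-order Euler step
function leapfrog(f, u0, k, T)
    uold = copy(u0)
    u = u0 + k*f(u0)                       # forward Euler starting step
    for _ in 2:round(Int, T/k)
        uold, u = u, uold + 2*k*f(u)
    end
    return u
end

T = 5.0
ref = leapfrog(f, u0, 1e-4, T)             # fine-step reference solution
errs = [abs(leapfrog(f, u0, k, T)[1] - ref[1]) for k in [0.02, 0.01, 0.005]]
ratios = errs[1:end-1] ./ errs[2:end]      # ≈ 4 for a second-order method
```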

Trapezoid

At each time step we seek $U^{n+1}$ which solves

$$U^{n+1} - U^{n} = \frac{k}{2} \left( f(U^{n+1}) + f(U^{n}) \right).$$

So, we look for a zero of

$$ g(u,U^{n}) = u - U^{n} - \frac{k}{2} \left( f(u) + f(U^{n}) \right).$$

The Jacobian is given by

$$ D_u g(u) = I - \frac{k}{2} D_uf(u). $$

In Matlab and Python the identity matrix is constructed by eye(n). Julia handles the identity matrix in a different way. When you evaluate eye(n) + A, Matlab has to add a (possibly zero) value to every entry of A, which is $O(n^2)$ work. Julia instead adds only to the diagonal using the I object from LinearAlgebra, which is $O(n)$ work.
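A sketch of one trapezoid step solved with Newton's method, using the I object from LinearAlgebra (same illustrative $f$ and Jacobian as above):

```julia
using LinearAlgebra

b1, b2, b3 = 0.0, 1.0, 2.0
c = (b1 + b2 + b3)/3
f(u)  = [u[2], u[3], u[2]*(c - u[1])]
Df(u) = [0.0 1.0 0.0; 0.0 0.0 1.0; -u[2] (c - u[1]) 0.0]

# solve g(u, U) = u - U - (k/2)(f(u) + f(U)) = 0 for u by Newton's method
function trap_step(f, Df, U, k; tol = 1e-12, maxit = 20)
    u = U + k*f(U)                   # forward Euler predictor as the initial guess
    for _ in 1:maxit
        g = u - U - (k/2)*(f(u) + f(U))
        J = I - (k/2)*Df(u)          # I adds to the diagonal in O(n) work
        du = J\g
        u -= du
        norm(du) < tol && break
    end
    return u
end

u0 = [b3, 0.0, -(b3 - b1)*(b3 - b2)/6]
U1 = trap_step(f, Df, u0, 0.01)      # one step of size k = 0.01
```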

Two-step Adams-Moulton (AM) method

This method is given by

$$ U^{n+2} = U^{n+1} + \frac{k}{12} \left( -f\left(U^n\right) + 8f\left(U^{n+1}\right) + 5f\left(U^{n+2}\right)\right). $$

Since this method is third order, we cannot start with forward Euler: a single Euler step gives an error contribution of $O(k^2)$, which is much larger than the overall $O(k^3)$ error we expect. But we can start off with a second-order method. Let's choose the 2-stage, second-order (explicit midpoint) Runge-Kutta method:

\begin{align*} U^* &= U^n + \frac{k}{2} f(U^n),\\ U^{n+1} &= U^n + k f(U^*). \end{align*}

Since this method is implicit, we again need to set up Newton's method: at each step we seek a zero of $$ g(u) = u - U^{n+1} - \frac{k}{12} \left( -f\left(U^n\right) + 8f\left(U^{n+1}\right) + 5f(u)\right), \qquad D_u g(u) = I - \frac{5k}{12} D_u f(u). $$
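A sketch of the full method: one RK2 (midpoint) starting step, then AM steps solved by Newton with the Jacobian $I - \frac{5k}{12} D_u f(u)$ (illustrative setup as before; a fine-step run serves as the reference):

```julia
using LinearAlgebra

b1, b2, b3 = 0.0, 1.0, 2.0
c = (b1 + b2 + b3)/3
f(u)  = [u[2], u[3], u[2]*(c - u[1])]
Df(u) = [0.0 1.0 0.0; 0.0 0.0 1.0; -u[2] (c - u[1]) 0.0]

# one Newton solve for U^{n+2} in the two-step Adams-Moulton method
function am2_step(f, Df, Un, Un1, k; tol = 1e-12, maxit = 20)
    u = Un1 + k*f(Un1)                       # forward Euler predictor
    for _ in 1:maxit
        g = u - Un1 - (k/12)*(-f(Un) + 8*f(Un1) + 5*f(u))
        du = (I - (5*k/12)*Df(u))\g
        u -= du
        norm(du) < tol && break
    end
    return u
end

function am2(f, Df, u0, k, T)
    Un = copy(u0)
    Ustar = Un + (k/2)*f(Un)                 # RK2 (midpoint) starting step
    Un1 = Un + k*f(Ustar)
    for _ in 2:round(Int, T/k)
        Un, Un1 = Un1, am2_step(f, Df, Un, Un1, k)
    end
    return Un1
end

u0 = [b3, 0.0, -(b3 - b1)*(b3 - b2)/6]
ref = am2(f, Df, u0, 1e-3, 2.0)              # fine-step reference solution
errs = [abs(am2(f, Df, u0, k, 2.0)[1] - ref[1]) for k in [0.04, 0.02, 0.01]]
ratios = errs[1:end-1] ./ errs[2:end]        # ≈ 8 for a third-order method
```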

4th-order Runge-Kutta (explicit, Example 5.13)

This gives the error reduction ratio:

$$\frac{\text{Error with time step } 2k}{\text{Error with time step } k}$$

For an $r$th-order method we should see this be approximately $2^r$.
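A sketch of the classical explicit RK4 method and this ratio test (illustrative setup as before; a fine-step run serves as the reference, and the $2^4 = 16$ ratio is what we expect for a fourth-order method):

```julia
b1, b2, b3 = 0.0, 1.0, 2.0
c = (b1 + b2 + b3)/3
f(u) = [u[2], u[3], u[2]*(c - u[1])]
u0 = [b3, 0.0, -(b3 - b1)*(b3 - b2)/6]

# classical fourth-order Runge-Kutta
function rk4(f, u0, k, T)
    u = copy(u0)
    for _ in 1:round(Int, T/k)
        k1 = f(u)
        k2 = f(u + (k/2)*k1)
        k3 = f(u + (k/2)*k2)
        k4 = f(u + k*k3)
        u += (k/6)*(k1 + 2*k2 + 2*k3 + k4)
    end
    return u
end

ref = rk4(f, u0, 1e-3, 5.0)                  # fine-step reference solution
errs = [abs(rk4(f, u0, k, 5.0)[1] - ref[1]) for k in [0.1, 0.05, 0.025]]
ratios = errs[1:end-1] ./ errs[2:end]        # ≈ 16 for a fourth-order method
```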

Estimates with rounding errors for RK4